Gemini in Gmail Found Vulnerable to Prompt Injection Phishing Attacks, Researcher Reports
The Google AI called Gemini in Gmail was found to have a security vulnerability that would be exploited to inject immediate acts by the attacker, according to the researchers. This bug gives the attacker a chance to control the reaction of Gemini in order to support phishing attacks just within the email correspondence which is a great threat to the customers.
Highlights:
- Malicious prompts hidden within emails exploit Gemini.
- Attackers trick Gemini into generating phishing replies.
- The AI unintentionally validates fraudulent sender requests.
- This method bypasses conventional email security filters.
Attackers can hide commands in emails as confirmed by researchers. The prompts use Gemini injection vulnerability prompts to make the AI think that malicious requests are harmless. In turn, Gemini may compose messages agreeing with confidential information or sharing phishing links, which may seem to be ordinary AI operations in Gmail.
Security is circumvented as the malicious communication comes following a first level of filtering, you can use the fact that Gemini is trusted and appears to be inside the user inbox. By authenticating a request made by an attacker, the compromised AI adds false credibility to the phishing attack thus placing the user at much high risk.
Google is taking note of the report and is working on the Gemini prompt injection methods. Ernest and other experts warn Gmail users to be very careful with the unexpected requests going through Gemini and especially those engaging sensitive data or action until effective counter measures against Gemini code injection are in place.